A limited memory adaptive trust-region approach for large-scale unconstrained optimization

Authors

M. Ahookhosh
Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria.

K. Amini
Department of Mathematics, Razi University, Kermanshah, Iran.

M. Kimiaei
Department of Mathematics, Asadabad Branch, Islamic Azad University, Asadabad, Iran.

M. R. Peyghami
Department of Mathematics, K. N. Toosi University of Technology, P.O. Box 16315-1618, Tehran, Iran.

Abstract

This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique reduces the number of subproblems that must be solved, while the structure of limited-memory quasi-Newton formulas makes it possible to handle large-scale problems. Theoretical analysis indicates that the new approach preserves global convergence to a first-order stationary point under classical assumptions. Moreover, superlinear and quadratic convergence rates are established under suitable conditions. Preliminary numerical experiments show the effectiveness of the proposed approach for solving large-scale unconstrained optimization problems.
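A minimal sketch of the two ingredients named above, written in standard trust-region and compact L-BFGS notation rather than the paper's own symbols (the authors' exact adaptive radius rule is not reproduced here): at the iterate $x_k$ with gradient $g_k$, the trial step $d_k$ approximately solves the subproblem

    \min_{d \in \mathbb{R}^n} \; m_k(d) = f(x_k) + g_k^{T} d + \tfrac{1}{2}\, d^{T} B_k d
    \quad \text{s.t.} \quad \|d\| \le \Delta_k ,

where $B_k$ is the limited-memory BFGS approximation kept in compact form,

    B_k = \gamma_k I + \Psi_k M_k \Psi_k^{T}, \qquad \Psi_k = [\,\gamma_k S_k \;\; Y_k\,],

with $S_k$ and $Y_k$ collecting the last $m$ step and gradient-difference pairs and $M_k$ a small $2m \times 2m$ matrix built from $S_k^{T} S_k$ and $S_k^{T} Y_k$, so that $B_k$ never has to be formed explicitly. An adaptive scheme then sets $\Delta_k$ from current gradient or model information instead of the classical enlarge/shrink update, which is what reduces the number of subproblems that have to be re-solved.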

Similar articles

Limited-Memory Reduced-Hessian Methods for Large-Scale Unconstrained Optimization

Limited-memory BFGS quasi-Newton methods approximate the Hessian matrix of second derivatives by the sum of a diagonal matrix and a fixed number of rank-one matrices. These methods are particularly effective for large problems in which the approximate Hessian cannot be stored explicitly. It can be shown that the conventional BFGS method accumulates approximate curvature in a sequence of expandi...
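One common way to write the first sentence above as a formula (generic L-BFGS form, not specific to this paper): with memory $m$, pairs $s_i = x_{i+1} - x_i$, $y_i = g_{i+1} - g_i$, and a scaling $\sigma_k > 0$,

    B_k = \sigma_k I + \sum_{i=k-m}^{k-1} \left( \frac{y_i y_i^{T}}{y_i^{T} s_i} - \frac{B^{(i)} s_i s_i^{T} B^{(i)}}{s_i^{T} B^{(i)} s_i} \right),
    \qquad B^{(k-m)} = \sigma_k I,

where $B^{(i+1)}$ denotes the partially updated matrix after the $i$-th correction. Each summand contributes two rank-one terms, so $B_k$ is a scaled diagonal matrix plus at most $2m$ rank-one matrices and needs only $O(mn)$ storage.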

An adaptive nonmonotone trust region method for unconstrained optimization problems based on a simple subproblem

Using a simple quadratic model in the trust region subproblem, a new adaptive nonmonotone trust region method is proposed for solving unconstrained optimization problems. In our method, based on a slight modification of the proposed approach in (J. Optim. Theory Appl. 158(2):626-635, 2013), a new scalar approximation of the Hessian at the current point is provided. Our new proposed method is eq...
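The "simple subproblem" referred to here can be illustrated as follows (a generic sketch, assumed rather than quoted from the paper): replace the full quasi-Newton matrix by a positive scalar multiple of the identity, $B_k = \gamma_k I$, so that the subproblem

    \min_{\|d\| \le \Delta_k} \; f(x_k) + g_k^{T} d + \tfrac{1}{2}\, \gamma_k \|d\|^{2}

has the closed-form solution $d_k = -\min\{\Delta_k / \|g_k\|,\; 1/\gamma_k\}\, g_k$ whenever $\gamma_k > 0$, avoiding an iterative subproblem solver. The scalar $\gamma_k$ can be taken, for example, as a Barzilai-Borwein-type quantity $\gamma_k = y_{k-1}^{T} s_{k-1} / (s_{k-1}^{T} s_{k-1})$.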

A Trust-region Method using Extended Nonmonotone Technique for Unconstrained Optimization

In this paper, we present a nonmonotone trust-region algorithm for unconstrained optimization. We first introduce a variant of the nonmonotone strategy proposed by Ahookhosh and Amini [AhA 01] and incorporate it into the trust-region framework to construct a more efficient approach. Our new nonmonotone strategy combines the current function value with the maximum function values in some pri...
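A typical form of such a combination (sketched here on the assumption that it follows the Ahookhosh-Amini style of nonmonotone term, which the excerpt does not spell out) is

    R_k = \eta_k f_{l(k)} + (1 - \eta_k) f(x_k), \qquad \eta_k \in [0, 1],
    \qquad f_{l(k)} = \max_{0 \le j \le m(k)} f(x_{k-j}),

where $f_{l(k)}$ is the maximum over a window of recent function values. The acceptance ratio then compares the actual reduction $R_k - f(x_k + d_k)$ with the predicted model reduction, so occasional increases in $f$ are tolerated.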

Large Scale Unconstrained Optimization

This paper reviews advances in Newton, quasi-Newton, and conjugate gradient methods for large-scale optimization. It also describes several packages developed during the last ten years and illustrates their performance on some practical problems. Much attention is given to the concept of partial separability, which is gaining importance with the arrival of automatic differentiation tools and of opt...
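For reference, the standard definition of partial separability (not specific to this survey) is that the objective decomposes as

    f(x) = \sum_{i=1}^{p} f_i(x),

where each element function $f_i$ depends on only a small number of the variables (more generally, has a Hessian of low rank). This structure lets gradient and Hessian information be assembled element by element, which is what makes it attractive for automatic differentiation and large-scale quasi-Newton updating.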

A retrospective trust-region method for unconstrained optimization

We introduce a new trust-region method for unconstrained optimization where the radius update is computed using the model information at the current iterate rather than at the preceding one. The update is then performed according to how well the current model retrospectively predicts the value of the objective function at the last iterate. Global convergence to first- and second-order critical points i...
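The retrospective idea can be written compactly as follows (a generic sketch; the paper's exact definition may differ in details): after the new model $m_{k+1}$ has been built at $x_{k+1}$, the radius is updated from the retrospective ratio

    \tilde{\rho}_{k+1} = \frac{f(x_k) - f(x_{k+1})}{m_{k+1}(x_k) - m_{k+1}(x_{k+1})},

i.e. how well the current model would have predicted the decrease actually obtained over the last step, whereas the classical ratio uses the previous model $m_k$.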

Journal:
Bulletin of the Iranian Mathematical Society

Volume 42, Issue 4, Pages 819-837
